Representing and synthesizing novel views of real-world dynamic scenes from casual monocular videos is a long-standing problem. Existing solutions typically approach dynamic scenes by applying geometry techniques or exploiting temporal information between adjacent frames, without considering the underlying background distribution of the entire scene or the transmittance along the ray dimension, which limits their performance on static and occluded areas. Our approach, $\textbf{D}$istribution-$\textbf{D}$riven neural radiance fields ($\text{D}^4$NeRF), offers high-quality view synthesis and a 3D solution to $\textbf{D}$etach the background from the entire $\textbf{D}$ynamic scene. Specifically, it employs a neural representation to capture the scene distribution of the static background and a 6D-input NeRF to represent the dynamic objects. Each ray sample is assigned an additional occlusion weight indicating how its transmittance is apportioned between the static and dynamic components. We evaluate $\text{D}^4$NeRF on public dynamic scenes and on urban driving scenes acquired from an autonomous-driving dataset. Extensive experiments demonstrate that our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background. Our code will be released at https://github.com/Luciferbobo/D4NeRF.
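To make the occlusion-weight idea concrete, below is a minimal NumPy sketch, assuming a standard NeRF alpha-compositing setup; the function name and the exact blending form are illustrative assumptions, not the paper's formulation. It shows how a per-sample weight could apportion density and color between a static and a dynamic field along one ray:

```python
import numpy as np

def composite_ray(sigma_s, rgb_s, sigma_d, rgb_d, w, deltas):
    """Blend static and dynamic branches along one ray (illustrative).

    sigma_s, sigma_d: (N,) densities from the static / dynamic fields
    rgb_s, rgb_d:     (N, 3) colors from the two fields
    w:                (N,) occlusion weight in [0, 1]; w = 1 attributes a
                      sample's transmittance to the dynamic component,
                      w = 0 to the static one
    deltas:           (N,) distances between adjacent samples
    """
    # Blended density and color per sample.
    sigma = w * sigma_d + (1.0 - w) * sigma_s
    rgb = w[:, None] * rgb_d + (1.0 - w)[:, None] * rgb_s

    # Standard NeRF alpha compositing.
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)
```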
In recent years, vision-centric perception has flourished in various autonomous driving tasks, including 3D detection, semantic map construction, motion forecasting, and depth estimation. Nevertheless, the latency of vision-centric approaches is too high for practical deployment (e.g., most camera-based 3D detectors have a runtime greater than 300ms). To bridge the gap between idealized research and real-world applications, it is necessary to quantify the trade-off between performance and efficiency. Traditionally, autonomous-driving perception benchmarks perform offline evaluation, neglecting the inference-time delay. To mitigate this problem, we propose the Autonomous-driving StreAming Perception (ASAP) benchmark, the first benchmark to evaluate the online performance of vision-centric perception in autonomous driving. On the basis of the 2Hz annotated nuScenes dataset, we first propose an annotation-extending pipeline to generate high-frame-rate labels for the 12Hz raw images. With practical deployment in mind, we further construct the Streaming Perception Under constRained-computation (SPUR) evaluation protocol, where the 12Hz inputs are used for streaming evaluation under different computational-resource constraints. In the ASAP benchmark, comprehensive experimental results reveal that model rankings change under different constraints, suggesting that model latency and computation budget should be treated as design choices when optimizing for practical deployment. To facilitate further research, we establish baselines for camera-based streaming 3D detection, which consistently improve streaming performance across various hardware. ASAP project page: https://github.com/JeffWang987/ASAP.
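To illustrate what streaming evaluation means in practice, here is a small, self-contained Python sketch; the names and the back-to-back scheduling assumption are ours, not ASAP's exact protocol. It matches each query timestamp to the newest prediction that has already finished computing:

```python
def streaming_match(query_times, pred_start_times, pred_latencies):
    """For each query timestamp, return the index of the newest
    prediction whose computation finished before the query, or None
    if no prediction is ready yet (e.g. at stream start)."""
    finish_times = [s + l for s, l in zip(pred_start_times, pred_latencies)]
    matches = []
    for t in query_times:
        ready = [i for i, f in enumerate(finish_times) if f <= t]
        matches.append(ready[-1] if ready else None)
    return matches

# Example: a 300 ms detector running back-to-back, queried at 12 Hz.
queries = [i / 12.0 for i in range(6)]   # ~83 ms spacing
starts = [0.0, 0.3, 0.6]
print(streaming_match(queries, starts, [0.3, 0.3, 0.3]))
# -> [None, None, None, None, 0, 0]
```

A 300 ms detector is thus scored against ground truth several frames newer than the frame it actually processed, which is exactly the latency penalty that offline evaluation ignores.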
Copy-Paste is a simple and effective data augmentation strategy for instance segmentation. By randomly pasting object instances onto new background images, it creates new training data for free and significantly boosts segmentation performance, especially for rare object categories. Although more diverse, higher-quality object instances yield larger gains from Copy-Paste, previous works obtain object instances either from human-annotated instance segmentation datasets or by rendering 3D object models, and both approaches are too expensive to scale up to good diversity. In this paper, we revisit Copy-Paste at scale with the power of newly emerged zero-shot recognition models (e.g., CLIP) and text2image models (e.g., StableDiffusion). We demonstrate for the first time that using a text2image model to generate images, or a zero-shot recognition model to filter noisily crawled images, for different object categories is a feasible way to make Copy-Paste truly scalable. To make this work, we design a data acquisition and processing framework, dubbed "X-Paste", upon which a systematic study is conducted. On the LVIS dataset, X-Paste provides impressive improvements over the strong baseline CenterNet2 with Swin-L as the backbone. Specifically, it achieves gains of +2.6 box AP and +2.1 mask AP on all classes, and even larger gains of +6.8 box AP and +6.5 mask AP on long-tail classes.
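The core Copy-Paste operation itself is simple; a minimal NumPy sketch is shown below. X-Paste's generation and filtering pipeline around it is not reproduced here, and the helper's interface is our assumption:

```python
import numpy as np

def copy_paste(background, instance_rgba, top_left):
    """Paste one segmented instance (RGBA, alpha = instance mask) onto a
    background image; returns the composed image plus the pasted
    instance's binary mask, which becomes the new training annotation.
    Assumes the instance fits inside the background at top_left."""
    h, w = instance_rgba.shape[:2]
    y, x = top_left
    out = background.copy()
    region = out[y:y + h, x:x + w].astype(np.float32)
    alpha = instance_rgba[..., 3:4].astype(np.float32) / 255.0
    region = alpha * instance_rgba[..., :3] + (1.0 - alpha) * region
    out[y:y + h, x:x + w] = region.astype(background.dtype)

    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = (alpha[..., 0] > 0.5).astype(np.uint8)
    return out, mask
```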
Accurate detection and grasping of transparent objects is challenging but important for robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and varying lighting conditions is proposed, comprising grasp position detection, tactile calibration, and visual-tactile fusion-based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian-distribution-based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasp position detection, showing good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping, a fully convolutional network-based tactile feature extraction method and a center-location-based adaptive grasping strategy are designed, improving the success rate by 36.7% compared to direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch and greatly improves the grasping efficiency for transparent objects.
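As a sketch of what a Gaussian-distribution-based grasp annotation might look like, the snippet below renders a soft grasp-quality map that peaks at the annotated grasp center and decays smoothly around it; the exact parametrization is an assumption, not the paper's:

```python
import numpy as np

def gaussian_grasp_map(height, width, center, sigma):
    """Render a grasp-quality map that peaks at the annotated grasp
    center and decays with a 2D Gaussian, so pixels near the ideal
    grasp point get soft, graded supervision instead of a hard label."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = center
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical usage: a 224x224 label map for one annotated grasp point.
quality = gaussian_grasp_map(224, 224, center=(112, 96), sigma=8.0)
```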
Real-time music accompaniment generation has a wide range of applications in the music industry, such as music education and live performances. However, automatic real-time accompaniment generation is still under-explored and often faces a trade-off between logical latency and exposure bias. In this paper, we propose SongDriver, a real-time music accompaniment generation system with neither logical latency nor exposure bias. Specifically, SongDriver divides the accompaniment generation task into two phases: 1) an arrangement phase, in which a Transformer model arranges chords for the input melody in real time and caches them for the next phase instead of playing them out; 2) a prediction phase, in which a CRF model generates playable multi-track accompaniment for the upcoming melody based on the previously cached chords. With this two-phase strategy, SongDriver directly generates the accompaniment for the upcoming melody, achieving zero logical latency. Furthermore, when predicting the chords for a time step, SongDriver refers to the cached chords from the first phase rather than to its own previous predictions, which avoids the exposure-bias problem. Since the input length is often constrained under real-time conditions, another potential problem is the loss of long-term sequential information. To compensate for this shortcoming, we extract four musical features from the long-term music piece before the current time step as global information. In the experiments, we train SongDriver on several open-source datasets and on an original aiSong dataset built from Chinese-style modern pop music scores. The results show that SongDriver outperforms existing state-of-the-art (SOTA) models on both objective and subjective metrics, while greatly reducing the physical latency.
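A schematic of the two-phase loop might look like the following Python skeleton; the class and callable names are hypothetical, and the real models are a Transformer and a CRF rather than opaque callables:

```python
from collections import deque

class TwoPhaseAccompanist:
    """Sketch of SongDriver's two-phase loop: arrange chords for the
    melody that just arrived (phase 1, cached rather than played), and
    emit accompaniment for the incoming melody from previously cached
    chords (phase 2), so playback never waits on generation."""

    def __init__(self, arranger, predictor, cache_len=8):
        self.arranger = arranger      # e.g. a Transformer chord model
        self.predictor = predictor    # e.g. a CRF accompaniment model
        self.chord_cache = deque(maxlen=cache_len)

    def step(self, melody_frame):
        # Phase 2 first: accompaniment for this frame comes from chords
        # cached at earlier steps -> zero logical latency.
        accompaniment = self.predictor(melody_frame, list(self.chord_cache))
        # Phase 1: arrange a chord for the current frame and cache it
        # for future predictions. The predictor never conditions on its
        # own previous outputs, which avoids exposure bias.
        self.chord_cache.append(self.arranger(melody_frame))
        return accompaniment
```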
In this competition, participants will address two fundamental causal challenges in machine learning in the context of education, using time-series data. The first is to identify the causal relationships between different constructs, where a construct is defined as the smallest element of learning. The second is to predict the impact of learning one construct on the ability to answer questions on other constructs. Addressing these challenges will enable the optimization of students' knowledge acquisition, which can be deployed in real edtech solutions that impact millions of students. Participants will run these tasks both in an idealized environment with synthetic data and in a real-world scenario with evaluation data collected from a series of A/B tests.
Recently, CNN-based RGB-D salient object detection (SOD) has achieved remarkable improvements in detection accuracy. However, existing models often fail to perform well in terms of efficiency and accuracy simultaneously, which hinders their potential applications on mobile devices and in many real-world problems. In this paper, to bridge the accuracy gap between lightweight and large models for RGB-D SOD, we propose an efficient module that greatly improves accuracy while adding little computation. Inspired by the observation that depth quality is a key factor influencing accuracy, we propose an efficient depth quality-inspired feature manipulation (DQFM) process, which dynamically filters depth features according to depth quality. DQFM resorts to the alignment of low-level RGB and depth features, as well as holistic attention on the depth stream, to explicitly control and enhance cross-modal fusion. We embed DQFM to obtain an efficient lightweight RGB-D SOD model called DFM-Net, for which we additionally design a tailored depth backbone and a two-stage decoder as basic parts. Extensive experimental results on 9 RGB-D datasets show that our DFM-Net outperforms recent efficient models, running at about 20 fps on a CPU with only 8.5Mb model size, while being 2.9/2.4 times faster and 6.7/3.1 times smaller than the latest best models A2dele and MobileSal. It also maintains state-of-the-art accuracy when compared to non-efficient models. Interestingly, further statistics and analyses verify DQFM's ability to distinguish depth maps of various qualities without any quality labels. Last but not least, we further apply DFM-Net to video SOD (VSOD), achieving performance comparable to recent efficient models while being 3/2.3 times faster/smaller than the prior best in this field. Our code is available at https://github.com/zwbx/dfm-net.
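A much-simplified sketch of depth-quality-driven gating in PyTorch is shown below; it conveys only the idea of re-weighting the depth stream by a predicted quality score, not DQFM's actual alignment and holistic-attention design, and all names are ours:

```python
import torch
import torch.nn as nn

class DepthQualityGate(nn.Module):
    """Toy version of depth-quality-driven filtering: predict a scalar
    quality score from pooled depth features and use it to re-weight
    the depth stream before RGB-D fusion (low quality -> less depth)."""

    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, depth_feat):
        q = self.score(depth_feat)          # (B, 1, 1, 1) in [0, 1]
        return rgb_feat + q * depth_feat    # gated cross-modal fusion
```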
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment, and policy-making. However, research on causal discovery has evolved separately from inference methods, preventing a straightforward combination of methods from both fields. In this work, we develop Deep End-to-end Causal Inference (DECI), a flow-based non-linear additive noise model that takes observational data and can perform both causal discovery and inference, including conditional average treatment effect (CATE) estimation. We provide a theoretical guarantee that DECI can recover the ground-truth causal graph under standard causal discovery assumptions. Motivated by application impact, we extend the model to heterogeneous, mixed-type data with missing values, allowing for both continuous and discrete treatment decisions. Our results show the competitive performance of DECI compared to relevant baselines for both causal discovery and (C)ATE estimation, in over a thousand experiments on synthetic datasets and causal machine-learning benchmarks across data types and levels of missingness.
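To ground the terminology, here is a tiny NumPy example of a non-linear additive noise model and of estimating a treatment effect by intervention; it is a synthetic illustration of the concepts, not DECI itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, treat=None):
    """Non-linear additive-noise SCM with X -> T, X -> Y, T -> Y.
    Passing `treat` performs the intervention do(T = treat)."""
    x = rng.normal(size=n)
    t = (rng.normal(size=n) + x > 0).astype(float) if treat is None \
        else np.full(n, float(treat))
    y = np.tanh(x) + 2.0 * t + 0.1 * rng.normal(size=n)
    return x, t, y

# Average treatment effect via intervention: E[Y|do(T=1)] - E[Y|do(T=0)].
_, _, y1 = simulate(100_000, treat=1)
_, _, y0 = simulate(100_000, treat=0)
print(y1.mean() - y0.mean())   # ~2.0, the true effect of T on Y
```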
Pre-training has marked numerous states of the art in high-level computer vision, but few attempts have been made to investigate how pre-training acts in image processing systems. In this paper, we present an in-depth study of image pre-training. To conduct this study on a solid basis with practical value in mind, we first propose a generic, cost-effective Transformer-based framework for image processing. It achieves highly competitive performance across a range of low-level tasks under constrained parameters and computational complexity. Then, based on this framework, we design a whole set of principled evaluation tools to seriously and comprehensively diagnose image pre-training on different tasks and uncover its effects on internal network representations. We find that pre-training plays strikingly different roles in low-level tasks. For example, pre-training introduces more local information into the higher layers in super-resolution (SR), yielding significant performance gains, while it hardly affects the internal feature representations in denoising, resulting in only slight gains. Furthermore, we explore different pre-training methods and reveal that multi-task pre-training is more effective and data-efficient. All code and models will be released at https://github.com/fenglinglwb/edt.
Benefiting from the development of generative adversarial networks (GANs), facial manipulation has recently made significant progress in both academia and industry. It has inspired a growing number of entertainment applications, but it also poses severe threats to individual privacy and even political security. To mitigate such risks, many countermeasures have been proposed. However, most methods are designed in a passive manner: they detect whether facial images or videos have been tampered with after they have already spread widely. These detection-based methods have a fatal limitation, namely that they only work for ex-post forensics and cannot prevent malicious behavior from taking effect in the first place. To address this limitation, in this paper we propose a novel initiative defense framework to degrade the performance of facial manipulation models controlled by malicious users. The basic idea is to inject imperceptible venom into the target facial data before manipulation. To this end, we first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom. An alternating training strategy is further leveraged to train both the surrogate model and the perturbation generator. Two typical facial manipulation tasks, face attribute editing and face reenactment, are considered in our initiative defense framework. Extensive experiments demonstrate the effectiveness and robustness of our framework in different settings. Finally, we hope this work can shed some light on initiative countermeasures against more adversarial scenarios.
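A rough PyTorch-style skeleton of the alternating training scheme is sketched below; all names are hypothetical and the losses are simplified stand-ins for the paper's actual objectives:

```python
import torch
import torch.nn.functional as F

def alternate_step(surrogate, generator, target_model,
                   opt_s, opt_g, faces, eps=8 / 255):
    """One round of the alternating scheme: (1) fit the surrogate to
    imitate the (black-box) manipulation model on clean faces, then
    (2) update the poison generator so that manipulating poisoned
    faces fails on the surrogate."""
    # --- 1) surrogate imitation on clean faces ---
    with torch.no_grad():
        target_out = target_model(faces)
    loss_s = F.mse_loss(surrogate(faces), target_out)
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

    # --- 2) generator update: bounded, imperceptible venom ---
    venom = eps * torch.tanh(generator(faces))   # ||venom||_inf <= eps
    poisoned = (faces + venom).clamp(0, 1)
    # Push the surrogate's output on poisoned faces away from the
    # clean manipulation result (i.e., maximize distortion of the edit).
    loss_g = -F.mse_loss(surrogate(poisoned), target_out)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_s.item(), loss_g.item()
```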